17 research outputs found

    A character-level error analysis technique for evaluating text entry methods


    Comparing Smartphone Speech Recognition and Touchscreen Typing for Composition and Transcription

    Ruan et al. found that transcribing short phrases with speech recognition was nearly 200% faster than typing on a smartphone. We extend this comparison to a novel composition task, using a protocol that enables a controlled comparison with transcription. Results show that composing and transcribing with speech are both faster than typing, but the magnitude of the difference is smaller for composition. Speech also has a lower error rate than the keyboard during composition, but not during transcription. When transcribing, speech outperformed typing on most NASA-TLX measures; when composing, there were no significant differences between typing and speech on any measure except physical demand.

    Recent developments in text-entry error rate measurement

    Previously, we defined robust and easy-to-calculate error metrics for text entry research. Herein, we announce a software implementation of this error analysis technique. We build on previous work by introducing two new metrics, and we extend error rate analyses to high keystroke-per-character entry techniques, such as Multi-Tap. ACM Classification Keywords: H.1.2 User/Machine Systems (Human Factors).
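    As a rough sketch of the keystrokes-per-character (KSPC) idea mentioned above for a high-KSPC technique such as Multi-Tap (the keypad letter groups, the one-press space key, and the function name are assumptions for illustration, not the announced software), error-free KSPC can be estimated in Python like this:

    # Minimal sketch: KSPC for Multi-Tap-style entry on a conventional phone keypad.
    # The letter groups and the single-press space key are assumptions for illustration.
    MULTITAP_GROUPS = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]
    PRESSES = {ch: group.index(ch) + 1 for group in MULTITAP_GROUPS for ch in group}
    PRESSES[" "] = 1  # assume a dedicated one-press space key

    def multitap_kspc(text):
        """Keystrokes per character for error-free Multi-Tap entry of `text`."""
        keystrokes = sum(PRESSES[ch] for ch in text.lower())
        return keystrokes / len(text)

    print(multitap_kspc("hello world"))  # about 2.3 key presses per character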

    Metrics for text entry research: An evaluation of MSD and KSPC, and …

    We describe and identify shortcomings in two statistics recently introduced to measure accuracy in text entry evaluations: the minimum string distance (MSD) error rate and keystrokes per character (KSPC). To overcome these weaknesses, a new framework for error analysis is developed and demonstrated. It combines analysis of the presented text, the input stream (keystrokes), and the transcribed text. New statistics include a unified total error rate that combines two constituent error rates: the corrected error rate (errors committed but corrected) and the not-corrected error rate (errors left in the transcribed text). The framework also provides measures of error correction efficiency, participant conscientiousness, utilised bandwidth, and wasted bandwidth. A text entry study demonstrating the new methodology is described.
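    As a hedged illustration of these statistics, the Python sketch below computes the MSD error rate and the constituent and total error rates following the definitions above; the character counts C, INF, and IF are assumed to have already been extracted from a transcription log, and the function names are chosen for this example rather than taken from the paper.

    def msd_error_rate(presented, transcribed):
        """Minimum string distance (Levenshtein) error rate between the presented
        and transcribed text, as a percentage of the longer string's length."""
        m, n = len(presented), len(transcribed)
        dist = list(range(n + 1))  # edit distances against the empty prefix of `presented`
        for i in range(1, m + 1):
            prev, dist[0] = dist[0], i
            for j in range(1, n + 1):
                cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
                prev, dist[j] = dist[j], min(dist[j] + 1,      # deletion
                                             dist[j - 1] + 1,  # insertion
                                             prev + cost)      # substitution
        return 100.0 * dist[n] / max(m, n, 1)

    def error_rates(C, INF, IF):
        """Constituent and unified error rates, in percent:
        C   - correct characters in the transcribed text
        INF - incorrect characters left in the transcribed text (not corrected)
        IF  - incorrect characters entered but later corrected"""
        denom = C + INF + IF
        if denom == 0:
            return {"corrected": 0.0, "not_corrected": 0.0, "total": 0.0}
        return {
            "corrected": 100.0 * IF / denom,
            "not_corrected": 100.0 * INF / denom,
            "total": 100.0 * (INF + IF) / denom,
        }

    print(msd_error_rate("the quick brown", "the quikc brown"))  # 2 edits -> about 13.3%
    print(error_rates(C=48, INF=2, IF=3))                        # total error rate about 9.4%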